    An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans

    Designing highly believable characters remains a major concern within digital games. Matching a chosen personality and other dramatic qualities to displayed behavior is an important part of improving overall believability. Gaze is a critical component of social exchanges and serves to make characters engaging or aloof, as well as to establish a character's role in a conversation. In this paper, we investigate the communication of status-related social signals by means of a virtual human's eye gaze. We constructed a cross-domain verbal-conceptual computational model of gaze for virtual humans to facilitate the display of social status. We describe the validation of the model's parameters, including the length of eye contact and gazes, movement velocity, equilibrium response, and head and body posture. In a first set of studies, conducted on Amazon Mechanical Turk using prerecorded video clips of animated characters, we found statistically significant differences in how the characters' status was rated, depending on the gaze behavior displayed. In a second step, based on these empirical findings, we designed an interactive system that incorporates dynamic eye tracking and spoken dialog, along with real-time control of a virtual character. We evaluated the model using a presential, interactive scenario of a simulated hiring interview. Corroborating our previous findings, the interactive study again yielded significant differences in the perception of status (p = .046). Thus, we believe status is an important aspect of dramatic believability; accordingly, this paper presents our social eye gaze model for realistic procedurally animated characters and shows its efficacy. Index Terms: procedural animation, believable characters, virtual human, gaze, social interaction, nonverbal behaviour, video game
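
    A minimal Python sketch of the idea of exposing status as a single control parameter is given below; the parameter names, ranges, and the linear interpolation are illustrative assumptions, not the validated values from the study.

```python
from dataclasses import dataclass

# Hypothetical parameter set; names and ranges are illustrative,
# not the values validated in the paper.
@dataclass
class GazeParams:
    eye_contact_s: float     # mean duration of mutual gaze
    aversion_s: float        # mean duration of gaze aversion
    saccade_velocity: float  # relative eye-movement velocity
    head_pitch_deg: float    # head posture; negative = lowered head

def gaze_for_status(status: float) -> GazeParams:
    """Interpolate gaze parameters from a status level in [0, 1].

    High-status displays are assumed to hold eye contact longer, avert
    gaze more briefly, and keep the head raised; low status the reverse.
    """
    s = min(max(status, 0.0), 1.0)
    lerp = lambda lo, hi: lo + s * (hi - lo)
    return GazeParams(
        eye_contact_s=lerp(1.0, 4.0),
        aversion_s=lerp(3.0, 1.0),
        saccade_velocity=lerp(0.6, 1.0),
        head_pitch_deg=lerp(-15.0, 5.0),
    )

print(gaze_for_status(0.9))  # dominant, high-status configuration
```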

    Simulink toolbox for real-time virtual character control

    Building virtual humans is a task of formidable complexity. We believe that, especially when building agents that interact with biological humans in real time over multiple sensory channels, graphical, data-flow-oriented programming environments are the development tool of choice. In this paper, we describe a toolbox for the system control and block diagramming environment Simulink that supports the construction of virtual humans. Available blocks include sources for stochastic processes, utilities for coordinate transformation and messaging, as well as modules for controlling gaze and facial expressions.
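
    The toolbox itself is a set of Simulink blocks, but the underlying fixed-step dataflow pattern can be sketched in a few lines of Python; the Ornstein-Uhlenbeck source and the saturation stage below are assumed stand-ins for typical blocks, not the toolbox's actual components.

```python
import math, random

# Minimal fixed-step dataflow loop in the spirit of a Simulink model.
# Block names and constants are illustrative assumptions.

def ornstein_uhlenbeck(x, dt, theta=1.0, sigma=0.3):
    """One Euler step of a mean-reverting stochastic process, a typical
    'stochastic source' for generating lifelike idle motion."""
    return x + theta * (0.0 - x) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)

dt, gaze_yaw = 0.02, 0.0           # 50 Hz update rate
for step in range(250):            # 5 simulated seconds
    gaze_yaw = ornstein_uhlenbeck(gaze_yaw, dt)   # source block
    clamped = max(-0.5, min(0.5, gaze_yaw))       # saturation block
    if step % 50 == 0:
        print(f"t={step*dt:4.1f}s  gaze yaw command: {clamped:+.3f} rad")
```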

    An Architecture for Personality-based, Nonverbal Behavior in Affective Virtual Humanoid Character

    As humans, we perceive other humans as individually different based, amongst other things, on consistent patterns of affect, cognition, and behavior. Here we propose a biologically and psychologically grounded cognitive architecture for the control of the nonverbal behavior of a virtual humanoid character during dynamic interactions with human users. Key aspects of the internal states and overt behavior of the virtual character are modulated by high-level personality parameters derived from the scientific literature. The virtual character should behave naturally and consistently while responding dynamically to feedback from the environment. Our architecture strives to yield consistent patterns of behavior through personality traits that have a modulatory influence at different levels of the hierarchy. These factors affect, on the one hand, high-level components such as emotional reactions and coping behavior and, on the other hand, low-level parameters such as the speed of movements and the repetition of gestures. Psychological data models are used as a reference to create a map between personality factors and patterns of behavior. We present a novel hybrid computational model that combines the control of the virtual character's discrete behavior, as it moves through the states of the interaction, with continuous updates of its emotional state based on feedback from interactions with the environment. To develop and evaluate the hybrid model, we propose a testing scenario based on a turn-taking interaction between a human participant and a 3D representation of the humanoid character. We believe that our work contributes to individualized and, ultimately, more believable humanoid artifacts that can be deployed in a wide range of application scenarios.
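
    As a rough illustration of the hybrid discrete/continuous idea, the following Python sketch combines a small interaction state machine with a continuously updated arousal value modulated by two personality parameters; all names, weights, and equations are assumptions rather than the architecture's actual implementation.

```python
from dataclasses import dataclass

# Illustrative sketch: a discrete interaction state machine plus a
# continuously updated emotional state, both modulated by high-level
# personality parameters. Equations and weights are assumptions.

@dataclass
class Personality:
    extraversion: float  # 0..1
    neuroticism: float   # 0..1

def update_arousal(arousal, feedback, p, dt=0.1):
    # Assumption: neuroticism amplifies reactions to feedback and slows decay.
    gain = 1.0 + 2.0 * p.neuroticism
    decay = 0.5 * (1.0 - 0.5 * p.neuroticism)
    return arousal + dt * (gain * feedback - decay * arousal)

STATES = {"idle": "greet", "greet": "listen", "listen": "respond", "respond": "idle"}

p = Personality(extraversion=0.8, neuroticism=0.3)
state, arousal = "idle", 0.0
for feedback in [0.0, 0.6, 0.2, -0.4]:        # environment feedback per turn
    arousal = update_arousal(arousal, feedback, p)
    gesture_rate = 0.5 + 0.5 * p.extraversion + 0.3 * arousal  # behavior modulation
    print(f"{state:8s} arousal={arousal:+.2f} gesture rate={gesture_rate:.2f}")
    state = STATES[state]                     # discrete state transition
```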

    When Agents Become Partners: A Review of the Role the Implicit Plays in the Interaction with Artificial Social Agents

    The way we interact with computers has changed significantly over recent decades. However, interaction with computers still falls behind human-to-human interaction in terms of seamlessness, effortlessness, and satisfaction. We argue that simultaneously using verbal, nonverbal, explicit, implicit, intentional, and unintentional communication channels addresses these three aspects of the interaction process. To better understand what has been done in the field of Human Computer Interaction (HCI) in terms of incorporating the types of channels mentioned above, we reviewed the literature on implicit nonverbal interaction, with a specific emphasis on the interaction between humans on the one side and robots and virtual humans on the other. These Artificial Social Agents (ASAs) are increasingly used as advanced tools for solving not only physical but also social tasks. In the literature review, we identify domains of interaction between humans and artificial social agents that have shown exponential growth over the years. The review highlights the value of incorporating implicit interaction capabilities in Human Agent Interaction (HAI), which we believe will lead to satisfying performance of human and artificial social agent teams. We conclude the article by presenting a case study of a system that harnesses subtle nonverbal, implicit interaction to increase the state of relaxation in users. This "Virtual Human Breathing Relaxation System" works on the principle of physiological synchronisation between a human and a virtual, computer-generated human. The active entrainment concept behind the relaxation system is generic and can be applied to other domains of implicit, physiology-based human-agent interaction.
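
    The entrainment principle can be sketched as a pair of coupled rates: the agent first matches the user's breathing, then drifts toward a slower target, pulling the user along. In the toy Python sketch below, the coupling constants and the simulated user response are assumptions.

```python
# Toy sketch of active entrainment: the virtual human starts in sync
# with the user's measured breathing rate, then drifts toward a slower
# target; the user is assumed to gradually follow the agent.

def entrainment(user_rate, target_rate=6.0, steps=20, dt=1.0):
    agent_rate = user_rate                                    # start in sync
    for t in range(steps):
        agent_rate += dt * 0.15 * (target_rate - agent_rate)  # drift to target
        user_rate += dt * 0.25 * (agent_rate - user_rate)     # user follows agent
        yield t, agent_rate, user_rate

for t, agent, user in entrainment(user_rate=14.0):            # breaths per minute
    if t % 5 == 0:
        print(f"t={t:2d}s  agent={agent:5.1f}  user={user:5.1f} breaths/min")
```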

    Towards accessible mental healthcare through augmented reality and self-assessment tools

    Mental health presents a growing public health concern worldwide, with mental illnesses affecting people's quality of life and imposing an economic burden on societies. The rapidly increasing demand for mental healthcare calls for new ways of disseminating mental health knowledge and of supporting people with mental illnesses. As an alternative to traditional mental health therapies and treatments, mental health self-assessment and self-management tools have become widely available to the public. While such tools can potentially offer more timely, personalised support, individuals seeking help are faced with the challenge of making an appropriate choice from an exhaustive number of online tools, mobile apps, and support programs. In this article, we present myGRaCE, a self-assessment and self-management mental health tool made accessible to users via Augmented Reality technologies. The advantage of the system is that it provides a direct pathway to relevant and reliable mental health resources and offers positive incentives and interventions for at-risk users. To investigate the usability and intuitiveness of the system, we conducted a pilot evaluation study with 10 participants. The results showed that the majority of study participants found the system intuitive and easy to use.

    An Interactive Space as a Creature: Mechanisms of Agency Attribution and Autotelic Experience

    Interacting with an animal is a highly immersive and satisfying experience. How can interaction with an artifact be imbued with the quality of an interaction with a living being? The authors propose a theoretical relationship that puts the predictability of the human-artifact interaction at the center of the attribution of agency and the experience of "flow." They empirically explored three modes of interaction that differed in the level of predictability of the interactive space's behavior. The results of the authors' study support the notion that there is a sweet spot of predictability in the reactions of the space that leads users to perceive the space as a creature. Flow factors discriminated between the different modes of interaction and showed the expected nonlinear relationship with the predictability of the interaction. The authors' results show that predictability is a key factor in inducing an attribution of agency, and they hope that their study can contribute to a more systematic approach to designing satisfactory and rich interactions between humans and machines.
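
    The manipulated variable, the predictability of the space's behavior, can be illustrated with a toy Python sketch in which the space responds deterministically with probability p and randomly otherwise; the action names and mixing weights are hypothetical.

```python
import random

# Toy sketch of interaction modes varying in predictability: the space
# either mirrors the visitor's action (predictable), responds randomly
# (unpredictable), or mixes the two (the hypothesised "sweet spot").

ACTIONS = ["lights_up", "dims", "pulses"]
MIRROR = {"approach": "lights_up", "retreat": "dims", "wave": "pulses"}

def space_response(visitor_action: str, predictability: float) -> str:
    """Respond deterministically with probability `predictability`,
    otherwise pick a random behavior."""
    if random.random() < predictability:
        return MIRROR[visitor_action]
    return random.choice(ACTIONS)

for mode, p in [("reactive", 1.0), ("sweet spot", 0.6), ("random", 0.0)]:
    print(mode, [space_response("approach", p) for _ in range(5)])
```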

    Axial Generation: Mixing Colour and Shapes to Automatically Form Diverse Digital Sculptures

    Automated computer generation of aesthetically pleasing artwork has been the subject of research for several decades. The unsolved problem of interest is how to please any audience without requiring too much of their involvement in the process of creation. Two-dimensional pictures have received a lot of attention; however, 3D artwork has remained relatively unexplored. This paper showcases an extended version of the Axial Generation Process (AGP), a versatile generation algorithm that can create both 2D and 3D items within the Concretism art style. The extensions presented here include calculating colour values for the artwork; increasing the range of forms that can be created through dynamic sizing of shapes and the inclusion of more primitive shape types; and, finally, the ability to create 2D items from multiple viewpoints. Both 2D and 3D items generated through the AGP were evaluated against a set of formal aesthetic measures and compared against two established generation systems, one based on manipulating pixels/voxels and another tracking the path of particles through 2D and 3D space. This initial evaluation shows that the process is capable of generating visually varied items which exhibit a generally diverse range of values across the measures used, in both two and three dimensions. Compared against the established generation processes, the AGP shows a good balance of performance and the ability to create complex and visually varied items.
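
    A toy Python sketch in the spirit of axial generation is shown below: primitive shapes are placed along a central axis with randomised size and colour. The real AGP is considerably richer, and every rule in the sketch is an assumption.

```python
import random

# Toy axial generation: place primitives along a central axis with
# dynamic sizing and per-item colour. All rules here are assumptions.

SHAPES = ["cube", "sphere", "cylinder"]

def generate(n_items=5, axis_length=10.0, seed=42):
    rng = random.Random(seed)
    items = []
    for i in range(n_items):
        items.append({
            "shape": rng.choice(SHAPES),
            "position": (0.0, 0.0, axis_length * i / max(n_items - 1, 1)),
            "size": rng.uniform(0.5, 2.0),                       # dynamic sizing
            "rgb": tuple(round(rng.random(), 2) for _ in range(3)),
        })
    return items

for item in generate():
    print(item)
```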

    Expressing Personality Through Non-verbal Behaviour in Real-Time Interaction

    The attribution of traits plays an important role as a heuristic for how we interact with others. Many psychological models of personality are analytical in that they derive a classification from reported or hypothesised behaviour. In the work presented here, we follow the opposite approach: our personality model generates behaviour that leads an observer to attribute personality characteristics to the actor. Concretely, the model controls all relevant aspects of non-verbal behaviour such as gaze, facial expression, gesture, and posture. The model, embodied in a virtual human, affords realistic real-time interaction with participants. Conceptually, our model focuses on the two dimensions of extra/introversion and stability/neuroticism. In the model, personality parameters influence both the internal affective state and the characteristics of behaviour execution. Importantly, the parameters of the model are based on empirical findings in the behavioural sciences. To evaluate our model, we conducted two types of studies: firstly, passive experiments in which participants rated videos showing variants of behaviour driven by different personality parameter configurations; secondly, presential experiments in which participants interacted with the virtual human, playing rounds of the Rock-Paper-Scissors game. Our results show that the model is effective in conveying the impression of the personality of a virtual character to users. Embodying the model in an artificial social agent capable of real-time interactive behaviour is the only way to move from an analytical to a generative approach to understanding personality, and we believe that this methodology raises a host of novel research questions in the field of personality theory.
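
    A minimal sketch of the trait-to-behaviour mapping on the model's two dimensions might look as follows; the linear weights are placeholders, whereas the actual model derives its parameters from the empirical literature.

```python
# Minimal sketch of mapping the two trait dimensions (extraversion and
# neuroticism) to nonverbal execution parameters. The weights below are
# placeholder assumptions, not the paper's empirically derived values.

def execution_params(extraversion: float, neuroticism: float) -> dict:
    """Map trait scores in [0, 1] to nonverbal execution parameters."""
    return {
        "gesture_amplitude": 0.4 + 0.6 * extraversion,   # expansive if extraverted
        "gesture_speed":     0.5 + 0.3 * extraversion + 0.2 * neuroticism,
        "gaze_aversion":     0.2 + 0.5 * neuroticism - 0.2 * extraversion,
        "smile_intensity":   0.3 + 0.5 * extraversion - 0.3 * neuroticism,
        "posture_openness":  0.4 + 0.5 * extraversion,
    }

print(execution_params(extraversion=0.9, neuroticism=0.1))  # stable extravert
print(execution_params(extraversion=0.2, neuroticism=0.8))  # neurotic introvert
```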

    Speech Breathing in Virtual Humans: An Interactive Model and Empirical Study

    Human speech production requires the dynamic regulation of air through the vocal system. While virtual character systems are commonly capable of speech output, they rarely take breathing during speaking (speech breathing) into account. We believe that integrating dynamic speech breathing systems in virtual characters can significantly contribute to augmenting their realism. Here, we present a novel control architecture aimed at generating speech breathing in virtual characters. This architecture is informed by behavioral, linguistic, and anatomical knowledge of human speech breathing. Based on textual input and controlled by a set of low- and high-level parameters, the system produces dynamic signals in real time that control the virtual character's anatomy (thorax, abdomen, head, nostrils, and mouth) and sound production (speech and breathing). In addition, we performed a study to determine the effects of including breathing-motivated speech movements, such as head tilts and chest expansions, as well as breathing sounds, in a virtual character's dialogue. This study includes speech generated both by a text-to-speech engine and from recorded voice.
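
    One way to picture such an architecture is a pipeline from text to breath groups to control signals; the Python sketch below uses an assumed punctuation-based segmentation and made-up timing and volume constants, not the paper's model.

```python
import re

# Illustrative pipeline sketch: segment input text into breath groups
# at punctuation, then emit a per-group lung-volume trajectory that
# could drive thorax/abdomen animation channels. The segmentation rule
# and all constants are assumptions.

def breath_groups(text: str):
    return [g.strip() for g in re.split(r"[.,;!?]", text) if g.strip()]

def lung_volume_trajectory(group: str, rate_wps=2.5, inhale_s=0.5):
    """Quick inhale to a volume scaled by utterance length, followed by
    a linear exhale while speaking."""
    words = len(group.split())
    speak_s = words / rate_wps
    peak = min(0.4 + 0.05 * words, 1.0)    # longer groups need more air
    return {"inhale_s": inhale_s, "speak_s": round(speak_s, 2), "peak": peak}

for g in breath_groups("Hello there. It is a pleasure, truly, to meet you!"):
    print(f"{g!r:25s} -> {lung_volume_trajectory(g)}")
```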

    Assessing the reliability of the Laban Movement Analysis system

    The Laban Movement Analysis system (LMA) is a widely used system for the description of human movement. Here we present the results of an empirical analysis of the reliability of the LMA system. Firstly, we developed a directed-graph-based representation for the formalization of LMA. Secondly, we implemented a custom video annotation tool for stimulus presentation and annotation of the formalized LMA. Using these two elements, we conducted an experimental assessment of LMA reliability, in which experts, Certified Movement Analysts (CMAs), were tasked with identifying the differences between a "neutral" movement and the same movement executed with a specific variation in one of the dimensions of the LMA parameter space. The videos represented variations on the pantomimed movement of knocking at a door or giving directions. To be as close as possible to the annotation practice of CMAs, participants were given full control over the number of times and the order in which they viewed the videos. The LMA annotation was captured by means of the video annotation tool, which guided the participants through the LMA graph by asking them multiple-choice questions at each node. Participants were asked to first annotate the most salient difference (round 1), and then the second most salient one (round 2), between the neutral gesture and the variation. To quantify the overall reliability of LMA, we computed Krippendorff's α. The quantitative data show that, depending on how the two rounds are integrated, the reliability of LMA ranges between weak and acceptable. The analysis of viewing behavior showed that, despite relatively large differences at the inter-individual level, there is no simple relationship between viewing behavior and individual performance (quantified as the level of agreement of the individual with the dominant rating). This research advances the state of the art in formalizing and implementing a reliability measure for the Laban Movement Analysis system. The experimental study we conducted allows us to identify some of the strengths and weaknesses of this widely used movement coding system. Additionally, we gained useful insights into the assessment procedure itself.
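
    Krippendorff's α for nominal data can be computed from a coincidence matrix, as in the self-contained Python sketch below (the PyPI package `krippendorff` provides a reference implementation); the toy ratings are invented for illustration.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of rating lists, one per unit (e.g. one per video
    clip); each inner list holds the category chosen by each rater, with
    missing ratings simply omitted. Units with fewer than two ratings
    contribute no pairable values and are skipped.
    """
    coincidences = Counter()              # o_ck: coincidence matrix
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for c, k in permutations(ratings, 2):
            coincidences[(c, k)] += 1.0 / (m - 1)

    n_c = Counter()                       # marginal totals per category
    for (c, _), count in coincidences.items():
        n_c[c] += count
    n = sum(n_c.values())                 # total pairable values

    # Observed and expected disagreement (nominal metric: delta = 1 if c != k).
    d_o = sum(v for (c, k), v in coincidences.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c, k in permutations(n_c, 2)) / (n * (n - 1))
    return 1.0 - d_o / d_e

# Toy data: two raters annotating four clips with an LMA Effort quality.
data = [["sudden", "sudden"], ["sustained", "sudden"],
        ["sustained", "sustained"], ["sudden", "sudden"]]
print(round(krippendorff_alpha_nominal(data), 3))
```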